Version Space Support Vector Machines
Authors
Abstract
The task of reliable classification is to determine whether the classification of a particular instance is reliable. There exist two approaches to this task: the Bayesian framework [3] and the typicalness framework [5]. Although both frameworks are useful, the Bayesian framework can be misleading and the typicalness framework is classifier dependent. To overcome these problems we argue for using version spaces (VSs) [3] for reliable classification. The key idea is to construct VSs containing hypotheses of the target class or of its close approximations. In this way the unanimous-voting classification rule of VSs does not misclassify instances; i.e., instance classifications become reliable. To construct VSs with this property we propose an approach that extends the volumes of VSs such that instance misclassifications are blocked. The approach and VSs are realized using support vector machines (SVMs) [4]. The combination is called version space support vector machines (VSSVMs). We show that VSSVMs are able to outperform the existing approaches to reliable classification.
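The unanimous-voting idea can be illustrated with a small sketch: an instance receives a label only if every hypothesis consistent with the training data agrees on it, which can be checked by testing whether the training set extended with the oppositely labelled instance is still separable. The sketch below is illustrative only and is not the authors' VSSVM implementation (in particular, it omits the volume-extension step); the function names and the use of scikit-learn's SVC with a large C as a hard-margin approximation are assumptions.

```python
# A minimal sketch (not the authors' implementation) of the unanimous-voting
# rule over a version space of linear separators, approximated with a
# hard-margin SVM consistency check.
import numpy as np
from sklearn.svm import SVC

def consistent(X, y):
    """Return True if some hyperplane separates (X, y) without training error,
    i.e. the version space for this labelling is non-empty."""
    if len(set(y)) < 2:                  # only one class present: trivially consistent
        return True
    clf = SVC(kernel="linear", C=1e6)    # large C approximates a hard margin
    clf.fit(X, y)
    return clf.score(X, y) == 1.0

def unanimous_vote(X_train, y_train, x):
    """Label x only if every consistent hypothesis agrees on it; otherwise abstain."""
    pos_ok = consistent(np.vstack([X_train, x]), np.append(y_train, +1))
    neg_ok = consistent(np.vstack([X_train, x]), np.append(y_train, -1))
    if pos_ok and not neg_ok:
        return +1          # every consistent hyperplane labels x positive
    if neg_ok and not pos_ok:
        return -1          # every consistent hyperplane labels x negative
    return None            # hypotheses disagree (or data inseparable): abstain
```

Under this rule an abstention is returned whenever both labellings remain separable, which is exactly how misclassifications are avoided at the price of coverage.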
Similar articles
Unanimous Voting using Support Vector Machines
This paper proposes a new approach to classification reliability. The key idea is to maintain version spaces containing (close approximations of) the target classifiers. In this way, the unanimous-voting rule applied to these version spaces outputs reliable instance classifications. Version spaces are defined in a hypothesis space of oriented hyperplanes. The unanimous-voting rule is implemented...
Version Space Support Vector Machines: An Extended Paper
We argue for using version spaces as an approach to reliable classification. The key idea is to construct version spaces containing the hypotheses of the target concept or of its close approximations. As a result, the unanimous-voting classification rule of version spaces does not misclassify; i.e., instance classifications become reliable. We propose to implement version spaces using support vecto...
Bayesian Learning in Reproducing Kernel Hilbert Spaces
Support Vector Machines find the hypothesis that corresponds to the centre of the largest hypersphere that can be placed inside version space, i.e. the space of all consistent hypotheses given a training set. The boundaries of version space touched by this hypersphere define the support vectors. An even more promising approach is to construct the hypothesis using the whole of version space. This i...
Bayes Point Machines: Estimating the Bayes Point in Kernel Space
From a Bayesian perspective, Support Vector Machines choose the hypothesis corresponding to the largest possible hypersphere that can be inscribed in version space, i.e. in the space of all consistent hypotheses given a training set. Those boundaries of version space which are tangent to the hypersphere define the support vectors. An alternative and potentially better approach is to construct th...
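The Bayes point idea can be sketched as follows: instead of the single SVM solution, one approximates the centre of mass of version space by averaging many consistent hypotheses. The sketch below is a simplified stand-in for the kernel billiard procedure of Bayes Point Machines; sampling consistent hyperplanes with randomly started perceptrons, and the function names used here, are illustrative assumptions.

```python
# A minimal sketch (under assumptions) of approximating the Bayes point:
# sample consistent linear hypotheses by running a perceptron from random
# starting points and presentation orders, then average the normalised
# weight vectors.
import numpy as np

def perceptron(X, y, rng, max_epochs=100):
    """Return a weight vector consistent with (X, y), found from a random start
    (assumes the data are linearly separable)."""
    n, d = X.shape
    w = rng.normal(size=d)
    for _ in range(max_epochs):
        errors = 0
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w) <= 0:   # mistake: update towards correct label
                w += y[i] * X[i]
                errors += 1
        if errors == 0:                   # consistent hypothesis found
            return w
    return w

def bayes_point(X, y, n_samples=50, seed=0):
    """Average of sampled consistent hypotheses ~ centre of mass of version space."""
    rng = np.random.default_rng(seed)
    ws = [perceptron(X, y, rng) for _ in range(n_samples)]
    ws = [w / np.linalg.norm(w) for w in ws]   # version space lives on the unit sphere
    return np.mean(ws, axis=0)
```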
A Comparative Study of Extreme Learning Machines and Support Vector Machines in Prediction of Sediment Transport in Open Channels
The limiting velocity in open channels to prevent long-term sedimentation is predicted in this paper using a powerful soft computing technique known as Extreme Learning Machines (ELM). The ELM is a Single-Layer Feed-forward Neural Network (SLFNN) with a high training speed. The dimensionless parameter of limiting velocity, known as the densimetric Froude number (Fr), is predicte...
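A minimal Extreme Learning Machine regressor, of the kind applied in the paper above, fixes random input weights and learns only the output weights with a single least-squares solve, which is what makes training fast. The class below is an illustrative sketch; the hidden-layer size, the sigmoid activation and the names are assumptions, and the paper's data and features are not reproduced.

```python
# A minimal sketch of an Extreme Learning Machine regressor: random, fixed
# hidden-layer weights; output weights obtained in one pseudoinverse solve.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Sigmoid activations of the random hidden layer.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        # Input weights and biases are drawn once and never trained.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights via least squares (Moore-Penrose pseudoinverse).
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```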
Reliability yields Information Gain
In this paper we prove that the reliability of the classifications of individual instances, provided by a classifier, results in information gain with respect to the accuracy of the classifier. We illustrate this result using our new approach to classification reliability called version space support vector machines.